
    Learning auditory space: generalization and long-term effects

    Background: Previous findings have shown that humans can learn to localize with altered auditory space cues. Here we analyze such learning processes and their effects up to one month on both localization accuracy and sound externalization. Subjects were trained and retested, focusing on the effects of stimulus type in learning, stimulus type in localization, stimulus position, previous experience, externalization levels, and time. Method: We trained listeners in azimuth and elevation discrimination in two experiments. Half participated in the azimuth experiment first and half in the elevation experiment first. In each experiment, half were trained with speech sounds and half with white noise. Retests were performed at several intervals: immediately after training and one hour, one day, one week, and one month later. In a control condition, we tested the effect of systematic retesting over time, with post-tests only immediately after training and either one day, one week, or one month later. Results: With training, all participants reduced their localization errors. This benefit was still present one month after training. Participants were more accurate in the second training phase, revealing an effect of previous experience on a different task. Training with white noise led to better results than training with speech sounds. Moreover, the training benefit generalized to untrained stimulus-position pairs. Externalization levels increased throughout the post-tests. In the control condition, the long-term localization improvement was no lower without additional exposure to the trained sounds, but externalization levels were lower. Conclusion: Our findings suggest that humans adapt readily to altered auditory space cues and that such adaptation spreads to untrained positions and sound types. We propose that such learning draws on all available cues, but that each cue type may be learned and retrieved differently. Localization learning is global rather than limited to stimulus-position pairs, and it differs from externalization processes.
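
    A minimal sketch of how the training benefit reported here could be quantified: mean absolute localization error per retest session, with errors that stay below the pre-training value a month later indicating a lasting benefit. All arrays and values below are hypothetical stand-ins, not data from the study.

```python
import numpy as np

# Hypothetical localization data: one array of (target, response) azimuth
# pairs, in degrees, per session. Session labels follow the abstract.
sessions = ["pre", "post", "1 hour", "1 day", "1 week", "1 month"]
rng = np.random.default_rng(0)
targets = rng.uniform(-60, 60, size=(len(sessions), 50))
responses = targets + rng.normal(0, [12, 6, 6, 7, 7, 8], size=(50, len(sessions))).T

# Mean absolute localization error per session.
for label, t, r in zip(sessions, targets, responses):
    print(f"{label:>8}: {np.mean(np.abs(r - t)):5.1f} deg")
```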

    The Effect of Auditory Distraction on the Useful Field of View in Hearing-Impaired Individuals and Its Implications for Driving

    This study assessed whether the increased listening demand experienced by hearing-impaired individuals exacerbates the detrimental impact of auditory distraction on a visual task (the useful field of view test), relative to normally hearing listeners. Auditory distraction negatively affects this visual task, which is linked with various driving performance outcomes. Hearing-impaired and normally hearing participants performed useful field of view testing with and without a simultaneous listening task; they also undertook a cognitive test battery. For all participants, performing the visual and auditory tasks together reduced performance on each respective test. On a number of subtests, hearing-impaired participants showed poorer visual task performance, though not to a statistically significant extent. Hearing-impaired participants were significantly poorer at a reading span task than normally hearing participants and tended to score lower on the most visually complex subtest of the visual task even without the auditory task. Useful field of view performance is negatively affected by auditory distraction, and hearing loss may present further problems, given the reductions in visual and cognitive task performance suggested in this study. Given the practical importance of these findings, suggestions are made for future work to extend the study.
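
    The dual-task comparison at the core of this design is conventionally summarized as a cost score: the proportional drop in performance when the two tasks are performed together rather than alone. A minimal sketch, with made-up accuracy values standing in for the useful field of view and listening scores (all numbers illustrative):

```python
# Dual-task cost: proportional drop in performance when the visual (UFOV)
# and auditory tasks are performed together rather than alone.
def dual_task_cost(single: float, dual: float) -> float:
    return (single - dual) / single

# Illustrative accuracies, not data from the study.
ufov_alone, ufov_dual = 0.92, 0.81
listening_alone, listening_dual = 0.88, 0.79

print(f"UFOV cost:      {dual_task_cost(ufov_alone, ufov_dual):.2%}")
print(f"Listening cost: {dual_task_cost(listening_alone, listening_dual):.2%}")
```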

    Individual Differences in Sound-in-Noise Perception Are Related to the Strength of Short-Latency Neural Responses to Noise

    Important sounds can easily be missed or misidentified in the presence of extraneous noise. We describe an auditory illusion in which a continuous ongoing tone becomes inaudible during a brief, non-masking noise burst more than one octave away, which is unexpected given the frequency resolution of human hearing. Participants strongly susceptible to this illusory discontinuity did not perceive illusory auditory continuity (in which a sound subjectively continues during a burst of masking noise) when the noises were short, yet did so at longer noise durations. Participants who were not prone to illusory discontinuity showed robust early electroencephalographic responses 40–66 ms after noise burst onset, whereas those prone to the illusion lacked these early responses. These data suggest that short-latency neural responses to auditory scene components reflect individual differences in the subsequent parsing of auditory scenes.
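
    The group difference hinges on the mean evoked response in a 40–66 ms window after noise-burst onset. A sketch of one standard way to take that measurement from epoched EEG, assuming a hypothetical `epochs` array (trials × samples) time-locked to the burst; the data here are synthetic:

```python
import numpy as np

fs = 1000.0                              # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
epochs = rng.normal(0, 1, (200, 300))    # hypothetical trials x samples, t=0 at burst onset

# Average across trials, then take the mean amplitude in the 40-66 ms
# window where non-susceptible listeners showed robust early responses.
erp = epochs.mean(axis=0)
window = slice(int(0.040 * fs), int(0.066 * fs))
early_amplitude = erp[window].mean()
print(f"Mean 40-66 ms amplitude: {early_amplitude:.3f} (arbitrary units)")
```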

    A Normalization Model of Attentional Modulation of Single Unit Responses

    Although many studies have shown that attention to a stimulus can enhance the responses of individual cortical sensory neurons, little is known about how attention accomplishes this change in response. Here, we propose that attention-based changes in neuronal responses depend on the same response normalization mechanism that adjusts sensory responses whenever multiple stimuli are present. We have implemented a model of attention that assumes attention works only through this normalization mechanism, and we show that it can replicate key effects of attention. The model successfully explains how attention changes the gain of responses to individual stimuli and also why attentional modulation is more robust, and not a simple gain change, when multiple stimuli are present inside a neuron's receptive field. Additionally, the model accounts well for physiological data that separately measure attentional modulation and sensory normalization of the responses of individual neurons in area MT of visual cortex. The proposal that attention works through a normalization mechanism sheds new light on a broad range of observations on how attention alters the representation of sensory information in cerebral cortex.
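
    One common form of such a model scales each stimulus's drive by an attention gain and divides by a pooled suppressive drive plus a semi-saturation constant. The sketch below is a generic instance of that idea, not the paper's exact implementation; all parameter values are illustrative.

```python
import numpy as np

def normalized_response(drive, attention_gain, sigma=1.0):
    """Generic attentional normalization: each stimulus's drive is scaled
    by its attention gain, then divided by the pooled (summed) attended
    drive plus a semi-saturation constant sigma."""
    excitatory = attention_gain * drive
    return excitatory / (excitatory.sum() + sigma)

# Two stimuli with equal drive inside the receptive field: attending to
# the first boosts its response largely by shifting the competition,
# not by a pure multiplicative gain -- a hallmark effect of the model.
drive = np.array([10.0, 10.0])
unattended = normalized_response(drive, np.array([1.0, 1.0]))
attended = normalized_response(drive, np.array([2.0, 1.0]))
print(unattended, attended)
```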

    Dissociated Mechanisms of Extracting Perceptual Information into Visual Working Memory

    The processing mechanisms of visual working memory (VWM) have been extensively explored in the past decade. However, how perceptual information is extracted into VWM remains largely unclear. The current study investigated this issue by testing whether perceptual information is extracted into VWM in an integrated-object manner, such that all irrelevant information is extracted (object hypothesis); in a feature-based manner, such that only target-relevant information is extracted (feature hypothesis); or in a manner analogous to processing in visual perception (analogy hypothesis). High-discriminable information, which is processed at the parallel stage of visual perception, and fine-grained information, which is processed via focal attention, were selected as representatives of perceptual information. The analogy hypothesis predicts that whereas high-discriminable information is extracted into VWM automatically, fine-grained information is extracted only if it is task-relevant. By manipulating the information type of the irrelevant dimension in a change-detection task, we found that performance was affected and the ERP component N270 was enhanced when a change between the probe and the memorized stimulus involved irrelevant high-discriminable information, but not when it involved irrelevant fine-grained information. We conclude that dissociated extraction mechanisms exist in VWM for information resolved via dissociated processes in visual perception (at least for the information tested in the current study), supporting the analogy hypothesis.
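
    Performance in a change-detection task of this kind is conventionally scored with a sensitivity measure such as d′ (z-transformed hit rate minus z-transformed false-alarm rate). A minimal sketch with illustrative rates, not the study's data:

```python
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity for a change-detection task: z(hits) - z(false alarms)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Illustrative values: an irrelevant high-discriminable change hurting
# performance would show up as lower d' than the fine-grained condition.
print(d_prime(0.82, 0.12))   # irrelevant fine-grained change
print(d_prime(0.71, 0.18))   # irrelevant high-discriminable change
```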

    Dissociable Influences of Auditory Object vs. Spatial Attention on Visual System Oscillatory Activity

    Given that both the auditory and visual systems have anatomically separate object identification (“what”) and spatial (“where”) pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory “what” vs. “where” attention tasks modulate activity in visual pathways, using cortically constrained source estimates of magnetoencephalographic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic (“what”) vs. spatial (“where”) aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from the time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7–13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri; lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases in the alpha range during Attend Phoneme vs. Attend Location, centered 400–600 ms after the onset of the second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity (“what”) vs. sound location (“where”). The alpha modulations can be interpreted as reflecting enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during “what” vs. “where” auditory attention.
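
    The sustained effect rests on alpha (7–13 Hz) power estimated from the silent periods between sound pairs. A sketch of one standard way to obtain such an estimate from a single source time course, using Welch's method; the sampling rate and signal here are assumptions, and the signal is synthetic:

```python
import numpy as np
from scipy.signal import welch

fs = 600.0                                   # MEG sampling rate (assumed)
rng = np.random.default_rng(2)
t = np.arange(0, 2.0, 1 / fs)                # a 2 s inter-pair segment
signal = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # synthetic 10 Hz + noise

# Power spectral density, then average over the 7-13 Hz alpha band.
freqs, psd = welch(signal, fs=fs, nperseg=512)
alpha = (freqs >= 7) & (freqs <= 13)
alpha_power = psd[alpha].mean()
print(f"Mean alpha (7-13 Hz) power: {alpha_power:.3f}")
```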

    Cueing listeners to attend to a target talker progressively improves word report as the duration of the cue-target interval lengthens to 2000 ms

    Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time, or about the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1000, and 2000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker’s spatial location or their gender. Participants directed attention to location and gender simultaneously (‘objects’) at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2000 ms than when it was 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the cue-target interval increases from 0 to 2000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those previously reported in either the visual or the auditory modality.
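
    The progressive shortening of correct-response latency can be summarized by regressing response time on the cue-target interval. A minimal sketch with illustrative mean latencies (the intervals come from the abstract; the response times are hypothetical):

```python
import numpy as np

# Cue-target intervals from the experiment (ms) and illustrative mean
# correct-response latencies -- a monotone speed-up with longer intervals.
cti = np.array([0, 250, 500, 1000, 2000])
rt = np.array([1480, 1430, 1390, 1330, 1250])   # hypothetical, ms

slope, intercept = np.polyfit(cti, rt, 1)
print(f"RT change per extra second of preparation: {slope * 1000:.0f} ms")
```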

    Measuring the Perceived Content of Auditory Objects Using a Matching Paradigm

    Two previous studies manipulated spatial cues to alter the perceptual organization of a sound mixture containing an ambiguous sound element (a pure tone; the “target”) that could belong to two competing auditory objects (a sequential tone stream and a simultaneous harmonic complex). In both studies, the sum of the contributions of the target to the two objects was less than the physical target level in the mixture. However, many listeners had difficulty making consistent judgments about the perceptual contribution of the target to the harmonic complex. The current study used stimuli similar to those in the previous studies, but with a target made up of five tones rather than a single pure tone. In addition, listeners performed a direct matching task to indicate the perceptual contribution of the target to the competing objects, rather than relying on an indirect mapping procedure. The matching task proved efficient and reliable. However, the complex-tone target was perceptually stronger in the harmonic complex and weaker in the sequential tone stream than in the past studies. As a result, the sum of the target contributions to the two objects roughly equaled the physical target level for all tested spatial configurations, unlike in the previous studies.
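
    The statement that the two perceptual contributions "sum" to the physical target level reads most naturally on an intensity scale; whether the study summed in intensity or another unit is an assumption here, and the matched levels below are hypothetical:

```python
import numpy as np

def sum_levels_db(levels_db):
    """Combine sound levels by summing intensities, then converting back to dB."""
    return 10 * np.log10(np.sum(10 ** (np.asarray(levels_db) / 10)))

# Hypothetical matches: the target's contribution heard in the tone stream
# and in the harmonic complex, vs. the physical target level in the mixture.
stream_db, complex_db, physical_db = 52.0, 58.0, 59.0
print(f"Summed contributions: {sum_levels_db([stream_db, complex_db]):.1f} dB "
      f"(physical level: {physical_db} dB)")
```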